Recent efforts on Neural Radiance Fields (NeRF) have shown impressive results on novel view synthesis by using implicit neural representations to represent 3D scenes. Due to the volumetric rendering process, however, NeRF inference is extremely slow, which limits its use on resource-constrained hardware such as mobile devices. Many works have been conducted to reduce the latency of running NeRF models, but most of them still require a high-end GPU for acceleration or extra storage memory, neither of which is available on mobile devices. Another emerging direction utilizes the neural light field (NeLF) for speedup, as only one forward pass is performed per ray to predict the pixel color. Nevertheless, to reach a rendering quality similar to NeRF, the network in NeLF is designed with intensive computation, which is not mobile-friendly. In this work, we propose an efficient network that runs in real time on mobile devices for neural rendering. We follow the NeLF setting to train our network. Unlike existing works, we introduce a novel network architecture that runs efficiently on mobile devices with low latency and small size, saving $15\times \sim 24\times$ storage compared with MobileNeRF. Our model achieves high-resolution generation while maintaining real-time inference for both synthetic and real-world scenes on mobile devices, e.g., $18.04$ ms (iPhone 13) to render one $1008\times756$ image of a real 3D scene. Additionally, we achieve similar image quality to NeRF and better quality than MobileNeRF (PSNR $26.15$ vs. $25.91$ on the real-world forward-facing dataset).
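As an illustration of the NeLF setting described above, where a single network forward pass maps a ray directly to a pixel color, here is a minimal PyTorch sketch. The architecture, width, and ray parameterization are placeholders, not the paper's mobile-optimized design.

```python
import torch
import torch.nn as nn

class TinyNeLF(nn.Module):
    """Illustrative neural light field: one forward pass per ray -> RGB.

    The ray is parameterized by its origin and unit direction (6 values);
    the width and depth below are placeholders, not the mobile design.
    """
    def __init__(self, hidden: int = 64, depth: int = 4):
        super().__init__()
        layers = [nn.Linear(6, hidden), nn.ReLU(inplace=True)]
        for _ in range(depth - 1):
            layers += [nn.Linear(hidden, hidden), nn.ReLU(inplace=True)]
        layers += [nn.Linear(hidden, 3), nn.Sigmoid()]  # RGB in [0, 1]
        self.net = nn.Sequential(*layers)

    def forward(self, ray_o: torch.Tensor, ray_d: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([ray_o, ray_d], dim=-1))

rays_o = torch.zeros(1024, 3)                                # ray origins (e.g. camera center)
rays_d = nn.functional.normalize(torch.randn(1024, 3), dim=-1)
rgb = TinyNeLF()(rays_o, rays_d)                             # (1024, 3): one pass, no per-ray sampling loop
```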
Can we make virtual characters in a scene interact with their surrounding objects through simple instructions? Is it possible to synthesize such motion plausibly with a diverse set of objects and instructions? Inspired by these questions, we present the first framework to synthesize the full-body motion of virtual human characters performing specified actions with 3D objects placed within their reach. Our system takes as input textual instructions specifying the objects and the associated intentions of the virtual characters and outputs diverse sequences of full-body motions. This is in contrast to existing work, where full-body action synthesis methods generally do not consider object interactions, and human-object interaction methods focus mainly on synthesizing hand or finger movements for grasping objects. We accomplish our objective by designing an intent-driven full-body motion generator, which uses a pair of decoupled conditional variational autoencoders (CVAE) to learn the motion of the body parts in an autoregressive manner. We also optimize for the positions of the objects with six degrees of freedom (6DoF) such that they plausibly fit within the hands of the synthesized characters. We compare our proposed method with the existing methods of motion synthesis and establish a new and stronger state-of-the-art for the task of intent-driven motion synthesis. Through a user study, we further show that our synthesized full-body motions appear more realistic to the participants in more than 80% of scenarios compared to the current state-of-the-art methods, and are perceived to be as good as the ground truth on several occasions.
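The following is a minimal PyTorch sketch of the core mechanism, a conditional VAE generating poses autoregressively from an intent embedding. The paper uses a pair of decoupled, body-part-specific CVAEs and a richer conditioning scheme, so all dimensions and module names here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PoseCVAE(nn.Module):
    """Minimal conditional VAE: given an intent embedding and the previous
    pose, sample the next pose. Dimensions are illustrative placeholders."""
    def __init__(self, pose_dim=63, cond_dim=512, z_dim=32, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(pose_dim * 2 + cond_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim + pose_dim + cond_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, pose_dim))
        self.z_dim = z_dim

    def forward(self, prev_pose, next_pose, cond):
        h = self.enc(torch.cat([prev_pose, next_pose, cond], dim=-1))
        mu, logvar = h.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        recon = self.dec(torch.cat([z, prev_pose, cond], dim=-1))
        return recon, mu, logvar

    @torch.no_grad()
    def rollout(self, first_pose, cond, steps=60):
        poses, pose = [first_pose], first_pose
        for _ in range(steps):                                  # autoregressive generation
            z = torch.randn(pose.shape[0], self.z_dim)
            pose = self.dec(torch.cat([z, pose, cond], dim=-1))
            poses.append(pose)
        return torch.stack(poses, dim=1)

model = PoseCVAE()
motion = model.rollout(torch.zeros(1, 63), torch.randn(1, 512), steps=60)  # (1, 61, 63)
```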
Conventional methods for human motion synthesis are either deterministic or struggle with the trade-off between motion diversity and motion quality. In response to these limitations, we introduce MoFusion, a new denoising-diffusion-based framework for high-quality conditional human motion synthesis that can generate long, temporally plausible, and semantically accurate motions based on a range of conditioning contexts (such as music and text). We also present ways to introduce well-known kinematic losses for motion plausibility within the motion-diffusion framework through our scheduled weighting strategy. The learned latent space can be used for several interactive motion editing applications, such as in-betweening, seed conditioning, and text-based editing, thus providing crucial abilities for virtual character animation and robotics. Through comprehensive quantitative evaluations and a perceptual user study, we demonstrate the effectiveness of MoFusion compared to the state of the art on established benchmarks in the literature. We urge the reader to watch our supplementary video and visit https://vcai.mpi-inf.mpg.de/projects/MoFusion.
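Below is a rough PyTorch sketch of how a kinematic loss can be folded into DDPM-style training with a noise-level-dependent weight, which is the spirit of the scheduled weighting strategy. The denoiser stub, the velocity-based kinematic term, and the exact schedule are assumptions rather than MoFusion's actual design.

```python
import torch
import torch.nn as nn

class DenoiserStub(nn.Module):
    """Placeholder epsilon-predictor; the actual MoFusion backbone differs."""
    def __init__(self, pose_dim=63, cond_dim=512, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(pose_dim + cond_dim + 1, hidden),
                                 nn.ReLU(), nn.Linear(hidden, pose_dim))

    def forward(self, x_t, t, cond):
        t_feat = t.float().view(-1, 1, 1).expand(-1, x_t.shape[1], 1)
        c = cond.unsqueeze(1).expand(-1, x_t.shape[1], -1)
        return self.net(torch.cat([x_t, c, t_feat], dim=-1))

def training_step(model, x0, cond, alphas_cumprod, t):
    a_bar = alphas_cumprod[t].view(-1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise      # forward diffusion
    pred = model(x_t, t, cond)                                 # predicted noise
    x0_hat = (x_t - (1 - a_bar).sqrt() * pred) / a_bar.sqrt()  # clean-motion estimate
    # Scheduled weighting (assumed form): kinematic terms count more at low noise levels.
    w_t = alphas_cumprod[t]
    kin = ((x0_hat[:, 1:] - x0_hat[:, :-1]) - (x0[:, 1:] - x0[:, :-1])).pow(2).mean(dim=(1, 2))
    return ((pred - noise) ** 2).mean() + (w_t * kin).mean()

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
x0 = torch.randn(8, 60, 63)                  # batch of 60-frame motions
cond = torch.randn(8, 512)                   # e.g. a text or music embedding
t = torch.randint(0, T, (8,))
loss = training_step(DenoiserStub(), x0, cond, alphas_cumprod, t)
```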
3D reconstruction and novel view synthesis of dynamic scenes from collections of single views have recently gained increased attention. Existing work shows impressive results for synthetic setups and forward-facing real-world data, but is severely limited in training speed and in the angular range over which novel views can be generated. This paper addresses these limitations and proposes a new method for full 360° novel view synthesis of non-rigidly deforming scenes. At the core of our method are: 1) an efficient deformation module that decouples the processing of spatial and temporal information for acceleration at training and inference time; and 2) a static module representing the canonical scene as a fast hash-encoded neural radiance field. We evaluate the proposed approach on the established synthetic D-NeRF benchmark, which enables efficient reconstruction from a single monocular view per time frame, randomly sampled from a full hemisphere. We refer to this form of input as monocularized data. To prove its practicality for real-world scenarios, we recorded twelve challenging sequences with human actors by sampling single frames from a synchronized multi-view rig. In both cases, our method is trained significantly faster than previous methods (minutes instead of days) while achieving higher visual accuracy for generated novel views. Our source code and data are available at our project page https://graphics.tu-bs.de/publications/kappel2022fast.
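The decoupling idea behind the deformation module can be sketched as follows: spatial coordinates and a per-frame temporal latent are processed by separate branches and fused only at the end, and the warped point is then used to query the static canonical field (a hash-encoded radiance field in the paper, not implemented here). All sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DecoupledDeformation(nn.Module):
    """Illustrative deformation module: spatial features and a per-frame
    temporal latent are computed separately and fused late, so the spatial
    branch can be reused across time steps."""
    def __init__(self, num_frames, feat=64, t_dim=16):
        super().__init__()
        self.spatial = nn.Sequential(nn.Linear(3, feat), nn.ReLU(), nn.Linear(feat, feat))
        self.temporal = nn.Embedding(num_frames, t_dim)            # one latent per time frame
        self.fuse = nn.Sequential(nn.Linear(feat + t_dim, feat), nn.ReLU(), nn.Linear(feat, 3))

    def forward(self, x, frame_idx):
        s = self.spatial(x)                                        # (N, feat)
        z = self.temporal(frame_idx).expand(x.shape[0], -1)        # (N, t_dim)
        return x + self.fuse(torch.cat([s, z], dim=-1))            # point warped into canonical space

deform = DecoupledDeformation(num_frames=100)
x = torch.rand(4096, 3)                                            # sample points along rays
x_canonical = deform(x, torch.tensor([7]))                         # then query the static (hash-encoded) field
```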
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
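For readers who want to try the released models, a minimal generation example with the Hugging Face transformers library is shown below; bigscience/bloom-560m is one of the smaller published BLOOM checkpoints, since the full 176B model requires multi-GPU inference or weight offloading.

```python
# Minimal generation sketch with Hugging Face transformers.
# The prompt and checkpoint choice are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("Translate to French: The cat sits on the mat.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```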
To analyze microvessels, a characteristic associated with plaque vulnerability, we developed an automated deep learning method for detecting them in intravascular optical coherence tomography (IVOCT) images. A total of 8,403 IVOCT image frames from 85 lesions and 37 normal segments were analyzed. Manual annotation was performed using dedicated software (OCTOPUS) previously developed by our group. Data augmentation in the polar (r, θ) domain was applied to raw IVOCT images to ensure that microvessels appear at all possible angles. Pre-processing included guidewire/shadow detection, lumen segmentation, pixel shifting, and noise reduction. DeepLab v3+ was used to segment microvessel candidates. A bounding box around each candidate was then classified as microvessel or non-microvessel by a shallow convolutional neural network. To improve classification, we applied data augmentation (i.e., angle rotation) to bounding boxes containing a microvessel during network training. Data augmentation and pre-processing significantly improved microvessel segmentation, yielding a Dice coefficient of 0.71±0.10 and pixel-wise sensitivity/specificity of 87.7±6.6%/99.8±0.1%. The network for classifying candidates performed exceptionally well, with sensitivity of 99.5±0.3%, specificity of 98.8±1.0%, and accuracy of 99.1±0.5%. The classification step eliminated the majority of residual false positives and increased the Dice coefficient from 0.71 to 0.73. In addition, our method detected microvessels in 698 image frames, compared to 730 from manual analysis, a 4.4% difference. Compared to the manual method, the automated method improved microvessel continuity, implying better segmentation performance. The method will be useful for research purposes as well as potential future treatment planning.
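A simplified sketch of the two-stage design, a semantic segmentation network proposing microvessel candidates followed by a shallow CNN classifying bounding-box crops, is given below. torchvision (≥0.13) ships DeepLabV3 rather than the exact DeepLab v3+ used in the paper, and all layer sizes are illustrative.

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

# Stage 1: semantic segmentation of microvessel candidates.
# num_classes=2 -> background vs. microvessel candidate; no pretrained weights loaded.
seg_model = deeplabv3_resnet50(weights=None, weights_backbone=None, num_classes=2)

# Stage 2: a shallow CNN that classifies each candidate bounding-box crop
# as microvessel vs. non-microvessel (layer sizes are illustrative).
classifier = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 2),
)

polar_frame = torch.rand(1, 3, 512, 512)                          # pre-processed (r, theta) IVOCT frame
candidate_mask = seg_model(polar_frame)["out"].argmax(dim=1)      # (1, 512, 512) candidate map
crop = torch.rand(1, 1, 64, 64)                                   # a bounding-box crop around one candidate
is_microvessel = classifier(crop).argmax(dim=1)                   # 1 = microvessel, 0 = false positive
```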
Multi-label image classification predicts a set of labels for a given image. Unlike multi-class classification, where each image carries exactly one label, this setting suits a much broader range of applications. In this work, we revisit two popular approaches to multi-label classification: transformer-based heads and graph-based branches that process label-relation information. Although transformer-based heads are considered to achieve better results than graph-based branches, we argue that, with a proper training strategy, graph-based methods show only a small drop in accuracy while consuming fewer computational resources at inference. In our training strategy, instead of the asymmetric loss (ASL), we introduce a modification of it that operates in angular space. Compared with the binary cross-entropy loss, it implicitly learns a proxy feature vector on the unit hypersphere for each class, providing better discriminative ability. With the proposed loss and training strategy, we obtain SOTA results among single-modality methods on widespread multi-label classification benchmarks such as MS-COCO, PASCAL-VOC, NUS-WIDE, and Visual Genome 500. The source code is available as part of the OpenVINO Training Extensions: https://github.com/openvinotoolkit/deep-object-reid/tree/multilabel
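A minimal PyTorch sketch of the angular-space idea is shown below: image features and per-class proxy vectors are L2-normalized so that logits become scaled cosine similarities on the unit hypersphere, trained with per-class binary cross-entropy. The margin and scale handling is simplified relative to the paper's asymmetric-loss modification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AngularMultiLabelLoss(nn.Module):
    """Illustrative angular-space multi-label loss with learnable class proxies.
    Scale and margin values are placeholders."""
    def __init__(self, feat_dim, num_classes, scale=30.0, margin=0.1):
        super().__init__()
        self.proxies = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale, self.margin = scale, margin

    def forward(self, features, targets):                        # targets: multi-hot (B, C)
        cos = F.normalize(features) @ F.normalize(self.proxies).t()
        logits = self.scale * (cos - self.margin * targets)      # margin applied to positives only
        return F.binary_cross_entropy_with_logits(logits, targets)

criterion = AngularMultiLabelLoss(feat_dim=512, num_classes=80)
loss = criterion(torch.randn(16, 512), torch.randint(0, 2, (16, 80)).float())
```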
3D human motion capture from monocular RGB images that accounts for interactions of the subject with a complex and possibly deformable environment is a very challenging, ill-posed, and under-explored problem. Existing methods address it only weakly and usually do not model the surface deformations that can occur when humans interact with scene surfaces. In contrast, this paper proposes MoCapDeform, a new framework for monocular 3D human motion capture that is the first to explicitly model non-rigid deformations of a 3D scene in order to improve 3D human pose estimation and the reconstruction of the deformable environment. MoCapDeform takes a monocular RGB video and a 3D scene aligned in camera space. It first localizes the subject in the input video, together with dense contact labels, using a new raycasting-based strategy. Next, our human-environment interaction constraints are leveraged to jointly optimize the global 3D human poses and non-rigid surface deformations. MoCapDeform achieves higher accuracy than competing methods on several datasets, including our newly recorded sequences with deformable background scenes.
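The joint optimization can be illustrated with a toy example: body contact points and the scene points they touch are pulled together while a regularizer keeps the non-rigid scene deformation small. The actual MoCapDeform energy contains additional terms (e.g., pose priors and the raycasting-based contact detection), so this is only a conceptual sketch.

```python
import torch

# Toy joint optimization of a global body correction and per-point scene offsets.
body_contacts = torch.randn(10, 3)                            # detected body contact points
scene_pts = torch.randn(10, 3)                                # corresponding scene surface points
body_offset = torch.zeros(3, requires_grad=True)              # global body correction
scene_deform = torch.zeros(10, 3, requires_grad=True)         # per-point non-rigid offsets

opt = torch.optim.Adam([body_offset, scene_deform], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    contact = ((body_contacts + body_offset) - (scene_pts + scene_deform)).pow(2).sum()
    reg = scene_deform.pow(2).sum()                           # keep the deformation plausible
    (contact + 0.1 * reg).backward()
    opt.step()
```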
Vocabulary transfer is a transfer-learning subtask in which a language model is fine-tuned with a corpus-specific tokenization instead of the default one used during pre-training. This usually improves the resulting performance of the model, and in this paper we demonstrate that vocabulary transfer is especially beneficial for medical text processing. Using three different medical natural language processing datasets, we show that vocabulary transfer provides up to ten percentage points of improvement in downstream classifier accuracy.
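A rough sketch of vocabulary transfer with the Hugging Face transformers library follows: a corpus-specific tokenizer is learned, the embedding matrix is resized to the new vocabulary, and the model is then fine-tuned. Smarter initialization of the new embeddings (e.g., from old sub-token embeddings) is commonly used but omitted here; the base model and the tiny corpus are stand-ins, not the paper's setup.

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

base = "bert-base-uncased"                                     # illustrative base checkpoint
old_tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForMaskedLM.from_pretrained(base)

# A corpus-specific tokenizer learned from the target-domain texts.
medical_corpus = [
    "patient presented with acute myocardial infarction",
    "chest radiograph shows bilateral pleural effusions",
]
new_tokenizer = old_tokenizer.train_new_from_iterator(medical_corpus, vocab_size=8000)

model.resize_token_embeddings(len(new_tokenizer))              # embeddings now match the new vocabulary
# ...continue with standard fine-tuning on the downstream medical task.
```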
We present UnrealEgo, a new large-scale naturalistic dataset for egocentric 3D human pose estimation. UnrealEgo builds on an advanced concept of eyeglasses equipped with two fisheye cameras that can be used in unconstrained environments. We design their virtual prototype and attach it to 3D human models for stereo-view capture. Next, we generate a large corpus of human motions. As a consequence, UnrealEgo is the first dataset to provide in-the-wild stereo images with the largest variety of motions among existing egocentric datasets. Furthermore, we propose a new benchmark method with the simple but effective idea of devising a 2D keypoint estimation module for stereo inputs to improve 3D human pose estimation. Extensive experiments show that our method outperforms previous state-of-the-art methods both qualitatively and quantitatively. UnrealEgo and our source code are available on our project web page.
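The benchmark idea, estimating 2D keypoint heatmaps for both fisheye views with a shared backbone and lifting the fused stereo evidence to a 3D pose, can be sketched as follows. The backbone, heads, and joint count are toy placeholders rather than the actual UnrealEgo baseline.

```python
import torch
import torch.nn as nn

class StereoKeypointNet(nn.Module):
    """Illustrative stereo egocentric pose network: per-view 2D heatmaps
    from a shared backbone, then a head lifting both views to 3D joints."""
    def __init__(self, num_joints=16):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                                      nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.heatmap_head = nn.Conv2d(64, num_joints, 1)                 # per-view 2D heatmaps
        self.pose_head = nn.Sequential(nn.AdaptiveAvgPool2d(8), nn.Flatten(),
                                       nn.Linear(2 * num_joints * 8 * 8, 256), nn.ReLU(),
                                       nn.Linear(256, num_joints * 3))

    def forward(self, left, right):
        hm = [self.heatmap_head(self.backbone(v)) for v in (left, right)]  # shared weights
        pose = self.pose_head(torch.cat(hm, dim=1))                        # fuse stereo evidence
        return hm, pose.view(-1, pose.shape[-1] // 3, 3)

left = right = torch.rand(2, 3, 256, 256)                                  # stereo fisheye crops
heatmaps, pose3d = StereoKeypointNet()(left, right)                        # pose3d: (2, 16, 3)
```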